AI Governance Framework


Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia

Kulothungan, Vikram, Gupta, Deepti

arXiv.org Artificial Intelligence

Artificial intelligence (AI) trends vary significantly across global regions, shaping the trajectory of innovation, regulation, and societal impact. This variation influences how different regions approach AI development, balancing technological progress with ethical and regulatory considerations. This study conducts a comparative analysis of AI trends in the United States (US), the European Union (EU), and Asia, focusing on three key dimensions: generative AI, ethical oversight, and industrial applications. The US prioritizes market-driven innovation with minimal regulatory constraints, the EU enforces a precautionary risk-based framework emphasizing ethical safeguards, and Asia employs state-guided AI strategies that balance rapid deployment with regulatory oversight. Although these approaches reflect different economic models and policy priorities, their divergence poses challenges to international collaboration, regulatory harmonization, and the development of global AI standards. To address these challenges, this paper synthesizes regional strengths to propose an adaptive AI governance framework that integrates risk-tiered oversight, innovation accelerators, and strategic alignment mechanisms. By bridging governance gaps, this study offers actionable insights for fostering responsible AI development while ensuring a balance between technological progress, ethical imperatives, and regulatory coherence.

Artificial intelligence (AI) has emerged as a transformative force in the 21st century, reshaping industries, governance structures, and societal interactions at an unprecedented pace. From generative AI creating human-like text and images to autonomous systems revolutionizing healthcare, finance, and manufacturing, AI's influence is profound and far-reaching.
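The "risk-tiered oversight" component of the proposed framework can be pictured as a lookup from an application's risk tier to its governance obligations. The sketch below is a hypothetical illustration: the tier names and obligations are assumptions for demonstration, not taken from the paper.

```python
# Hypothetical sketch of risk-tiered oversight: map a risk tier to the
# governance obligations it triggers. Tier names and obligations are
# illustrative assumptions, not the paper's actual framework.

OVERSIGHT = {
    "minimal": ["voluntary code of conduct"],
    "limited": ["transparency disclosure"],
    "high":    ["pre-deployment audit", "human oversight", "incident reporting"],
}

def required_oversight(tier: str) -> list[str]:
    """Return the governance obligations for a given risk tier."""
    try:
        return OVERSIGHT[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

# A high-risk application (e.g. automated hiring) triggers the full set:
print(required_oversight("high"))
```

A table-driven mapping like this keeps the policy declarative: adding a tier or obligation changes data, not control flow, which mirrors how tiered regulatory regimes are typically specified.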


The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems

Mokander, Jakob, Sheth, Margi, Watson, David, Floridi, Luciano

arXiv.org Artificial Intelligence

Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things should be sorted so that their grouping will promote successful actions for some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that attempts to classify AI systems found in previous literature use one of three mental models. The Switch, i.e., a binary approach according to which systems either are or are not considered AI systems depending on their characteristics. The Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose. And the Matrix, i.e., a multi-dimensional classification of systems that takes various aspects into account, such as context, data input, and decision-model. Each of these models for classifying AI systems comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the conceptual tools needed to operationalise AI governance in practice.
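The three mental models can be made concrete as three classifier functions over the same system description. The sketch below is illustrative only: the attributes and thresholds are hypothetical assumptions, not definitions from the article.

```python
# Illustrative sketch of the Switch, Ladder, and Matrix mental models.
# The attributes and thresholds below are hypothetical assumptions,
# not taken from the article.
from dataclasses import dataclass

@dataclass
class System:
    learns_from_data: bool  # assumed proxy for "is AI" (Switch)
    risk_score: float       # 0.0-1.0, assumed input to the Ladder
    context: str            # e.g. "hiring" (a Matrix dimension)
    data_input: str         # e.g. "personal data" (a Matrix dimension)

def switch(s: System) -> bool:
    """The Switch: binary -- a system either is or is not an AI system."""
    return s.learns_from_data

def ladder(s: System) -> str:
    """The Ladder: classify by the ethical risk the system poses."""
    if s.risk_score >= 0.8:
        return "unacceptable"
    if s.risk_score >= 0.5:
        return "high"
    if s.risk_score >= 0.2:
        return "limited"
    return "minimal"

def matrix(s: System) -> dict:
    """The Matrix: a multi-dimensional profile rather than a single label."""
    return {"is_ai": switch(s), "risk_tier": ladder(s),
            "context": s.context, "data_input": s.data_input}

resume_screener = System(True, 0.7, "hiring", "personal data")
print(matrix(resume_screener))
```

The contrast is visible in the return types: the Switch collapses a system to one bit, the Ladder to one ordinal tier, while the Matrix preserves several dimensions at once, which is what makes it more expressive but also harder to act on.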


Use an AI governance framework to surmount challenges

#artificialintelligence

The hype around AI may make you think otherwise, but AI is still in the early stages of enterprise adoption, and its governance challenges reflect that. The first of these challenges is limited expertise: the need for AI governance, and how it affects the success of AI projects, is still not widely understood. Lack of governance expertise is not unique to end-user organizations.


The four keys to trustworthy AI - Watson Blog

#artificialintelligence

Artificial intelligence is a major factor in people’s lives. It influences who gets a loan, how companies hire and compensate employees, how customers are treated, even where infrastructure and aid are allocated. It is already deeply embedded in our businesses, organizations and governments, including the 40,000 client engagements with IBM Watson across 20 industries in 80 countries. As the world increasingly relies on AI to help make major predictions and decisions, it becomes essential that people can trust the process and results of that AI. IBM is working on building that trust. Organizations that neglect their ethical duties in AI can face lawsuits, regulatory fines, angry customers, embarrassment, reputational damage, and destruction of shareholder value. For example, consider the fairness of your organization’s hiring practices. If your HR department uses an existing machine-learning-based application to score prospective employees, how do you ensure trustworthy implementation of this technology? From a technical perspective, governed data and AI technology should…


Selected Readings on the Use of Artificial Intelligence in the Public Sector

#artificialintelligence

The Living Library's Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector. As artificial intelligence becomes more developed, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in government can cut costs and administrative work, these technologies are often early in development and difficult for organizations to understand and control, with potentially harmful effects as a result.


Watchdog finds the Pentagon needs to improve artificial intelligence project management

#artificialintelligence

Poor management of artificial intelligence projects in the Department of Defense could erode the United States' competitive advantage in the emerging technology, the Defense Department's watchdog warned in a July 1 report. The DoD inspector general suggested the Joint Artificial Intelligence Center, established to facilitate the adoption of artificial intelligence tools across the department, take several steps to improve project management, including determining a standard definition of artificial intelligence, improving data sharing and developing a process to accurately track artificial intelligence programs. The JAIC missed a March 2020 deadline to release a governance framework. It still plans to do so, according to the report, but that date is redacted in the report. The inspector general started the audit to determine the gaps and weaknesses in the department's enterprise-wide AI governance, the responsibility of the JAIC.


Singapore releases Asia's first AI governance framework

#artificialintelligence

The Singapore government has released an artificial intelligence (AI) governance framework to help businesses tackle the ethical and governance challenges arising from the growing use of AI across industries.


How Singapore aims to ensure consumer trust in Artificial Intelligence

#artificialintelligence

Singapore's Info-communications Media Development Authority (IMDA) recently announced the creation of an Advisory Council on Ethical Use of AI and Data as part of an effort to bring together a range of key stakeholders to inform the government on possible approaches to ensure consumer trust in AI-powered products and services. ITU News recently caught up with IMDA's Assistant Chief Executive of Data Innovation and Protection, Yeong Zee Kin, to learn more about Singapore's approach to this important and timely issue. With the recent launch of the Digital Economy Framework for Action, Singapore has entered a new phase of its digitalisation journey. The ability to use and share data innovatively and responsibly can become a competitive advantage for businesses. Infusing AI into business operations can accelerate digital transformation through new features and functionalities.